Tootfinder

Opt-in global Mastodon full-text search. Join the index!

No exact results. Similar results found.
@heiseonline@social.heise.de
2024-05-04 15:46:00

AngryGF: AI chatbot for men is meant to help them deal with an angry girlfriend
The AI chat app "AngryGF" aims to help men calm down angry girlfriends. One practice scenario reads: "Rescued mother from the river first".

@crell@phpc.social
2024-02-05 12:55:01

Using #PHP arrays as pseudo-objects is almost never the right answer. They're less self-documenting, slower, worse on memory, and more bug prone.
peakd.com/php/@crell/php-use-a
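The toot's claim is specific to PHP, but the underlying contrast (an untyped map used as a record versus a declared type) is general. As a hedged, cross-language sketch only, and not Crell's own example, the same contrast in Python; the names are made up for illustration:

from dataclasses import dataclass

# "Pseudo-object" style: nothing declares which keys exist, and a misspelled
# key only fails (or silently misbehaves) at runtime.
user_as_dict = {"name": "Ada", "email": "ada@example.org"}
print(user_as_dict["email"])

# Declared-type style: the fields are self-documenting, construction fails
# loudly if a field is missing or misspelled, and static tooling can check uses.
@dataclass
class User:
    name: str
    email: str

user = User(name="Ada", email="ada@example.org")
print(user.email)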

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:49:06

When to Retrieve: Teaching LLMs to Utilize Information Retrieval Effectively
Tiziano Labruna, Jon Ander Campos, Gorka Azkune
arxiv.org/abs/2404.19705 arxiv.org/pdf/2404.19705
arXiv:2404.19705v1 Announce Type: new
Abstract: In this paper, we demonstrate how Large Language Models (LLMs) can effectively learn to use an off-the-shelf information retrieval (IR) system specifically when additional context is required to answer a given question. Given the performance of IR systems, the optimal strategy for question answering does not always entail external information retrieval; rather, it often involves leveraging the parametric memory of the LLM itself. Prior research has identified this phenomenon in the PopQA dataset, wherein the most popular questions are effectively addressed using the LLM's parametric memory, while less popular ones require IR system usage. Following this, we propose a tailored training approach for LLMs, leveraging existing open-domain question answering datasets. Here, LLMs are trained to generate a special token when they do not know the answer to a question. Our evaluation of the Adaptive Retrieval LLM (Adapt-LLM) on the PopQA dataset showcases improvements over the same LLM under three configurations: (i) retrieving information for all the questions, (ii) always using the parametric memory of the LLM, and (iii) using a popularity threshold to decide when to use a retriever. Through our analysis, we demonstrate that Adapt-LLM is able to generate the special token when it determines that it does not know how to answer a question, indicating the need for IR, while it achieves notably high accuracy levels when it chooses to rely only on its parametric memory.
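As a reading aid only, here is a minimal sketch of the adaptive-retrieval inference loop the abstract describes. It is not the authors' code: the function names, prompt formats, and the RET_TOKEN string are assumptions (the abstract's token name is elided above).

# Hedged sketch of the Adapt-LLM inference strategy described above.
# Placeholders, not the authors' implementation: generate(), retrieve(),
# and the token string RET_TOKEN are all assumed names.

RET_TOKEN = "<RET>"  # assumed spelling; the abstract's special token is elided

def answer(question, generate, retrieve):
    """Answer a question, retrieving external context only when the model asks.

    generate(prompt) -> str : the fine-tuned LLM (Adapt-LLM in the paper)
    retrieve(query)  -> str : an off-the-shelf IR system returning a passage
    """
    # First pass: let the model answer from its parametric memory alone.
    first = generate(f"Question: {question}\nAnswer:")

    if RET_TOKEN not in first:
        # No special token: the model trusts its parametric memory.
        return first.strip()

    # The special token signals that external context is needed.
    passage = retrieve(question)
    second = generate(f"Context: {passage}\nQuestion: {question}\nAnswer:")
    return second.strip()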

@arXiv_csIR_bot@mastoxiv.page
2024-03-04 08:31:44

This paper (arxiv.org/abs/2310.16605) has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_qbioNC_bot@mastoxiv.page
2024-04-05 07:08:33

Information Theory and Direction Selectivity
Aman Chawla
arxiv.org/abs/2404.02915 arxiv.org/pdf/2404.02915

@andres4ny@social.ridetrans.it
2024-04-30 21:50:18

Ah, a Lactationally-Transmitted Infection (or disease). LTI (or LTD)
statnews.com/2024/04/30/h5n1-b

So what is the main driver of infection among cows?

While the contribution of respiratory transmission is still in question, there appears to be little doubt that a lot of spread is happening in milking parlors, where cows are strapped into the milking machines, and that in dairy cows, H5N1 seems to be primarily infecting mammary glands. The amount of virus in the udders of infected cows is off-the-charts high, making it easy to see how one cow’s infection soon becomes a herd’s problem.

“…

@arXiv_csCL_bot@mastoxiv.page
2024-03-01 06:53:33

Memory-Augmented Generative Adversarial Transformers
Stephan Raaijmakers, Roos Bakker, Anita Cremers, Roy de Kleijn, Tom Kouwenhoven, Tessa Verhoef
arxiv.org/abs/2402.19218